Moderate deviations for particle filtering
Consider the state space model (X_t,Y_t), where (X_t) is a Markov chain, and
(Y_t) are the observations. In order to solve the so-called filtering problem,
one has to compute L(X_t|Y_1,...,Y_t), the law of X_t given the observations
(Y_1,...,Y_t). The particle filtering method gives an approximation of the law
L(X_t|Y_1,...,Y_t) by an empirical measure \frac{1}{n}\sum_1^n\delta_{x_{i,t}}.
In this paper we establish the moderate deviation principle for the empirical
mean \frac{1}{n}\sum_1^n\psi(x_{i,t}) (centered and properly rescaled) when the
number of particles grows to infinity, enhancing the central limit theorem.
Several extensions and examples are also studied.

Comment: Published at http://dx.doi.org/10.1214/105051604000000657 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org).
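The particle approximation described above can be illustrated with a toy bootstrap particle filter; the linear-Gaussian model, its parameters, and the choice of multinomial resampling below are our own illustrative assumptions, not taken from the paper:

```python
import math
import random

random.seed(0)

def bootstrap_filter(ys, n=2000, phi=0.9, sx=1.0, sy=1.0):
    """Bootstrap particle filter for the toy model
    X_t = phi * X_{t-1} + N(0, sx^2),  Y_t = X_t + N(0, sy^2).
    Returns n particles whose empirical measure (1/n) sum delta_{x_i}
    approximates L(X_T | Y_1, ..., Y_T)."""
    particles = [random.gauss(0.0, sx) for _ in range(n)]
    for y in ys:
        # propagate each particle through the Markov kernel
        particles = [phi * x + random.gauss(0.0, sx) for x in particles]
        # weight by the observation likelihood, then resample so the
        # output is again an unweighted empirical measure
        w = [math.exp(-0.5 * ((y - x) / sy) ** 2) for x in particles]
        particles = random.choices(particles, weights=w, k=n)
    return particles

particles = bootstrap_filter([0.5, 1.0, 0.8, 1.2])
# empirical mean (1/n) sum psi(x_i) with psi(x) = x
est = sum(particles) / len(particles)
```

The moderate deviation principle of the paper concerns the fluctuations of `est` around the true conditional expectation as n grows.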
Quantitative bounds on convergence of time-inhomogeneous Markov chains
Convergence rates of Markov chains have been widely studied in recent years.
In particular, quantitative bounds on convergence rates have been studied in
various forms by Meyn and Tweedie [Ann. Appl. Probab. 4 (1994) 981-1101],
Rosenthal [J. Amer. Statist. Assoc. 90 (1995) 558-566], Roberts and Tweedie
[Stochastic Process. Appl. 80 (1999) 211-229], Jones and Hobert [Statist. Sci.
16 (2001) 312-334] and Fort [Ph.D. thesis (2001) Univ. Paris VI]. In this
paper, we extend a result of Rosenthal [J. Amer. Statist. Assoc. 90 (1995)
558-566] that concerns quantitative convergence rates for time-homogeneous
Markov chains. Our extension allows us to consider f-total variation distance
(instead of total variation) and time-inhomogeneous Markov chains. We apply our
results to simulated annealing.

Comment: Published at http://dx.doi.org/10.1214/105051604000000620 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org).
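For intuition, the phenomenon such bounds quantify can be seen on a two-state time-homogeneous chain, where the total variation distance to stationarity decays geometrically; the transition matrix and horizon below are toy choices of ours, not the paper's f-total variation bounds:

```python
def tv(p, q):
    # total variation distance between two distributions on {0, 1}
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

P = [[0.9, 0.1],
     [0.2, 0.8]]          # transition matrix of a two-state chain
pi = [2 / 3, 1 / 3]       # its stationary distribution (pi P = pi)
mu = [1.0, 0.0]           # start deterministically in state 0

dists = []
for _ in range(20):
    dists.append(tv(mu, pi))
    # one step of the chain: mu <- mu P
    mu = [mu[0] * P[0][j] + mu[1] * P[1][j] for j in range(2)]
```

Here the decay rate is the second eigenvalue of P, namely 0.9 + 0.8 - 1 = 0.7 per step; the time-inhomogeneous case studied in the paper replaces the single P by a sequence of kernels.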
Convergence of adaptive mixtures of importance sampling schemes
In the design of efficient simulation algorithms, one is often beset with a
poor choice of proposal distributions. Although the performance of a given
simulation kernel can clarify a posteriori how adequate this kernel is for the
problem at hand, a permanent on-line modification of kernels causes concerns
about the validity of the resulting algorithm. While the issue is most often
intractable for MCMC algorithms, the equivalent version for importance sampling
algorithms can be validated quite precisely. We derive sufficient convergence
conditions for adaptive mixtures of population Monte Carlo algorithms and show
that Rao--Blackwellized versions asymptotically achieve an optimum in terms of
a Kullback divergence criterion, while more rudimentary versions do not benefit
from repeated updating.

Comment: Published at http://dx.doi.org/10.1214/009053606000001154 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
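A minimal sketch of an adaptive mixture scheme of this kind, with a standard normal target and three fixed Gaussian components whose mixture weights are adapted by a Rao-Blackwellized update; every concrete choice (target, components, sample sizes) is ours for illustration, not the paper's:

```python
import math
import random

random.seed(1)

def npdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

target = lambda x: npdf(x, 0.0, 1.0)           # target density
comps = [(-3.0, 1.0), (3.0, 1.0), (0.0, 1.0)]  # fixed component (mean, sd)
alpha = [1 / 3, 1 / 3, 1 / 3]                  # mixture weights to adapt

for _ in range(10):
    # draw from the current mixture proposal
    xs = []
    for _ in range(500):
        k = random.choices(range(3), weights=alpha)[0]
        m, s = comps[k]
        xs.append(random.gauss(m, s))
    q = lambda x: sum(a * npdf(x, m, s) for a, (m, s) in zip(alpha, comps))
    w = [target(x) / q(x) for x in xs]
    # Rao-Blackwellized update: each draw credits every component through
    # its posterior membership probability, not only the one that drew it
    new = [0.0, 0.0, 0.0]
    for x, wi in zip(xs, w):
        qx = q(x)
        for k in range(3):
            m, s = comps[k]
            new[k] += wi * alpha[k] * npdf(x, m, s) / qx
    alpha = [v / sum(new) for v in new]
```

After a few iterations the weight of the component matching the target dominates, which is the sense in which repeated updating pays off for the Rao-Blackwellized version.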
Adaptive Importance Sampling in General Mixture Classes
In this paper, we propose an adaptive algorithm that iteratively updates both
the weights and component parameters of a mixture importance sampling density
so as to optimise the importance sampling performance, as measured by an
entropy criterion. The method is shown to be applicable to a wide class of
importance sampling densities, which includes in particular mixtures of
multivariate Student t distributions. The performance of the proposed scheme
is studied on both artificial and real examples, highlighting in particular
the benefit of a novel Rao-Blackwellisation device which can be easily
incorporated in the updating scheme.
Maximum Likelihood Estimator for Hidden Markov Models in continuous time
The paper studies large sample asymptotic properties of the Maximum
Likelihood Estimator (MLE) for the parameter of a continuous time Markov chain,
observed in white noise. Using the method of weak convergence of likelihoods
due to I.Ibragimov and R.Khasminskii, consistency, asymptotic normality and
convergence of moments are established for MLE under certain strong ergodicity
conditions of the chain.

Comment: Warning: due to a flaw in the publishing process, some of the
references in the published version of the article are confused.
Minimum variance importance sampling via Population Monte Carlo
Variance reduction has always been a central issue in Monte Carlo experiments. Population Monte Carlo can be used to this effect, in that a mixture of importance functions, called a D-kernel, can be iteratively optimised to achieve the minimum asymptotic variance for a function of interest among all possible mixtures. The implementation of this iterative scheme is illustrated for the computation of the price of a European option in the Cox-Ingersoll-Ross model.
Convergence of adaptive sampling schemes
In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given kernel can clarify how adequate it is for the problem at hand, a permanent on-line modification of kernels causes concerns about the validity of the resulting algorithm. While the issue is quite complex and most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for a wide class of population Monte Carlo algorithms and show that Rao-Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions simply do not benefit from repeated updating.
A population Monte Carlo scheme with transformed weights and its application to stochastic kinetic models
This paper addresses the problem of Monte Carlo approximation of posterior
probability distributions. In particular, we have considered a recently
proposed technique known as population Monte Carlo (PMC), which is based on an
iterative importance sampling approach. An important drawback of this
methodology is the degeneracy of the importance weights when the dimension of
either the observations or the variables of interest is high. To alleviate this
difficulty, we propose a novel method that performs a nonlinear transformation
on the importance weights. This operation reduces the weight variation,
avoiding degeneracy and increasing the efficiency of the importance
sampling scheme, especially when drawing from proposal functions that are
poorly adapted to the true posterior.
For the sake of illustration, we have applied the proposed algorithm to the
estimation of the parameters of a Gaussian mixture model. This is a very simple
problem that enables us to clearly show and discuss the main features of the
proposed technique. As a practical application, we have also considered the
popular (and challenging) problem of estimating the rate parameters of
stochastic kinetic models (SKM). SKMs are highly multivariate systems that
model molecular interactions in biological and chemical problems. We introduce
a particularization of the proposed algorithm to SKMs and present numerical
results.

Comment: 35 pages, 8 figures
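One simple instance of such a nonlinear weight transformation is clipping the largest weights, which reduces weight variation and raises the effective sample size; the target, the deliberately mismatched proposal, and the clipping threshold below are illustrative choices of ours, not the paper's specific transformation:

```python
import math
import random

random.seed(2)

# target: unnormalised N(0,1); proposal: a badly matched N(0, 5^2)
# (normalising constants cancel out of the quantities computed below)
target = lambda x: math.exp(-0.5 * x * x)
S = 5.0
xs = [random.gauss(0.0, S) for _ in range(1000)]
raw = [target(x) / (math.exp(-0.5 * (x / S) ** 2) / S) for x in xs]

def ess(w):
    # effective sample size: (sum w)^2 / sum w^2
    return sum(w) ** 2 / sum(wi * wi for wi in w)

# nonlinear transformation of the weights: clip at the 50th-largest value
thr = sorted(raw)[-50]
clipped = [min(wi, thr) for wi in raw]

ess_raw, ess_clipped = ess(raw), ess(clipped)
```

Since the largest weight always exceeds the weighted mean of the weights, shrinking the top weights can only increase the effective sample size, at the price of some bias.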
Sampling constrained probability distributions using Spherical Augmentation
Statistical models with constrained probability distributions are abundant in
machine learning. Some examples include regression models with norm constraints
(e.g., Lasso), probit, many copula models, and latent Dirichlet allocation
(LDA). Bayesian inference involving probability distributions confined to
constrained domains could be quite challenging for commonly used sampling
algorithms. In this paper, we propose a novel augmentation technique that
handles a wide range of constraints by mapping the constrained domain to a
sphere in the augmented space. By moving freely on the surface of this sphere,
sampling algorithms handle constraints implicitly and generate proposals that
remain within boundaries when mapped back to the original space. Our proposed
method, called {Spherical Augmentation}, provides a mathematically natural and
computationally efficient framework for sampling from constrained probability
distributions. We show the advantages of our method over state-of-the-art
sampling algorithms, such as exact Hamiltonian Monte Carlo, using several
examples including truncated Gaussian distributions, Bayesian Lasso, Bayesian
bridge regression, reconstruction of quantized stationary Gaussian process, and
LDA for topic modeling.

Comment: 41 pages, 13 figures
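The core change of variables can be illustrated for a norm constraint ||x|| <= 1: embed the ball as a hemisphere of the unit sphere in one dimension higher, move freely on the sphere, and project back. The sketch below shows only this geometric mapping with uniform sphere samples, not the paper's Hamiltonian dynamics:

```python
import math
import random

random.seed(3)

d = 2
samples = []
for _ in range(500):
    # a uniform point on the sphere S^d embedded in R^(d+1):
    # normalise a standard Gaussian vector
    z = [random.gauss(0.0, 1.0) for _ in range(d + 1)]
    r = math.sqrt(sum(v * v for v in z))
    theta = [v / r for v in z]
    # drop the auxiliary last coordinate: the projected point satisfies
    # the constraint ||x|| <= 1 automatically, with no rejection step
    samples.append(theta[:d])

inside = all(sum(v * v for v in x) <= 1.0 + 1e-12 for x in samples)
```

Because every proposal generated on the sphere maps back inside the constraint set, the sampler never has to test or reject boundary violations, which is the computational advantage the abstract points to.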